A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices

Authors

  • Wataru Matsumoto
  • Manabu Hagiwara
  • Petros Boufounos
  • Kunihiko Fukushima
  • Toshisada Mariyama
  • Xiongxin Zhao
Abstract

We present a new deep neural network architecture, motivated by sparse random matrix theory, that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder. We regard autoencoders as an information-preserving dimensionality reduction method, similar to random projections in compressed sensing. Thus, exploiting recent theory on sparse matrices for dimensionality reduction, we demonstrate experimentally that classification performance does not deteriorate if the autoencoder is replaced with a computationally efficient sparse dimensionality reduction matrix.

Published at the International Conference on Neural Information Processing (ICONIP), 2016. Affiliations: Mitsubishi Electric Corporation, Information Technology R&D Center, Kanagawa, Japan; Chiba University, Chiba, Japan; Mitsubishi Electric Research Laboratories, Cambridge, MA, USA; Fuzzy Logic System Institute, Fukuoka, Japan.
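
As a concrete illustration of the idea: instead of training a stacked autoencoder for the first dimensionality-reduction stage, a fixed sparse random matrix embeds the input, and only the layers above it are trained. The following is a minimal sketch in Python/NumPy; the sign-matrix construction, density, and layer sizes are our own illustrative choices, not the paper's exact configuration.

    import numpy as np
    from scipy import sparse

    def sparse_projection(n_in, n_out, density=0.1, seed=0):
        """Fixed sparse random matrix with nonzero entries in {-1, +1}.

        Most entries are zero, so applying the embedding is far cheaper
        than a trained dense autoencoder layer of the same shape.
        """
        rng = np.random.default_rng(seed)
        A = sparse.random(n_out, n_in, density=density, random_state=seed,
                          data_rvs=lambda k: rng.choice([-1.0, 1.0], size=k))
        # Scale so squared norms are preserved in expectation.
        return (A / np.sqrt(n_out * density)).tocsr()

    # Illustrative sizes: 784-dimensional inputs embedded into 200 dimensions.
    A = sparse_projection(784, 200)
    x = np.random.rand(784)
    z = A @ x   # low-complexity embedding that replaces the autoencoder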


Similar resources

Sparse Autoencoders in Sentiment Analysis

This paper examines the use of sparse autoencoders in the task of sentiment analysis. Autoencoders can be used for pre-training a deep neural network, for discovering new features, or for dimensionality reduction. In this paper, sparse autoencoders were used for parameter initialization in a deep neural network. Experiments showed that the accuracy of text classification to a particular s...
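
To make the initialization scheme concrete: train an autoencoder on unlabeled data with a sparsity penalty on its hidden code, then copy the encoder weights into the classifier's first layer before supervised fine-tuning. Below is a minimal PyTorch sketch under assumptions of ours (bag-of-words inputs, layer sizes, an L1 penalty as the sparsity term); the paper's exact setup may differ.

    import torch
    import torch.nn as nn

    # Illustrative sizes: 5000 bag-of-words features, 256 hidden units.
    enc, dec = nn.Linear(5000, 256), nn.Linear(256, 5000)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))

    def pretrain_step(x, l1_weight=1e-4):
        """One unsupervised step: reconstruction loss plus an L1 sparsity
        penalty on the hidden code (one simple way to make it sparse)."""
        h = torch.sigmoid(enc(x))
        loss = nn.functional.mse_loss(dec(h), x) + l1_weight * h.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    # After pre-training, initialize the classifier's first layer from the
    # encoder, then fine-tune the whole classifier on labeled sentiment data.
    clf = nn.Sequential(nn.Linear(5000, 256), nn.Sigmoid(), nn.Linear(256, 2))
    clf[0].load_state_dict(enc.state_dict())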

Short and Deep: Sketching and Neural Networks

Data-independent methods for dimensionality reduction such as random projections, sketches, and feature hashing have become increasingly popular in recent years. These methods often seek to reduce dimensionality while preserving the hypothesis class, resulting in inherent lower bounds on the size of projected data. For example, preserving linear separability requires Ω(1/γ) dimensions, where γ ...
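
Feature hashing, one of the data-independent methods named above, makes the trade-off concrete: each feature index is mapped to a random bucket with a random sign, so the projection is implicit and costs no memory. A minimal sketch, with the output dimension chosen arbitrarily:

    import hashlib
    import numpy as np

    def _h(key: str) -> int:
        # Deterministic hash (Python's built-in hash() is salted per run).
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def feature_hash(features, n_buckets=256):
        """Hash a sparse feature vector {index: value} into n_buckets dims,
        with a pseudo-random sign per feature to keep the map unbiased."""
        z = np.zeros(n_buckets)
        for idx, value in features.items():
            sign = 1.0 if _h(f"s{idx}") % 2 == 0 else -1.0
            z[_h(f"b{idx}") % n_buckets] += sign * value
        return z

    # A document over a huge vocabulary reduced to 256 dimensions:
    print(feature_hash({101: 2.0, 40007: 1.0, 9999999: 3.0}))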

Learned D-AMP: Principled Neural Network based Compressive Image Recovery

Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes ...
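
For context on what is being unrolled: Learned D-AMP is built on the D-AMP algorithm, whose iteration alternates a denoising step with a residual update that includes an Onsager correction term; that correction keeps the denoiser's effective input noise approximately Gaussian, which is what makes the method principled rather than a black box. A rough NumPy sketch of plain D-AMP with a generic denoiser (our simplification; the learned variant replaces denoise with a trained CNN per iteration):

    import numpy as np

    def damp(y, A, denoise, n_iters=10, eps=1e-3, seed=0):
        """Sketch of D-AMP: x <- denoise(x + A^T z), where the residual z
        carries an Onsager correction based on the denoiser's divergence."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x, z, div = np.zeros(n), y.copy(), 0.0
        for _ in range(n_iters):
            z = y - A @ x + z * div / m        # residual + Onsager term
            sigma = np.linalg.norm(z) / np.sqrt(m)
            v = x + A.T @ z                    # pseudo-data for the denoiser
            x = denoise(v, sigma)
            # Monte Carlo estimate of the denoiser's divergence.
            eta = rng.standard_normal(n)
            div = eta @ (denoise(v + eps * eta, sigma) - x) / eps
        return x

    # Example denoiser for sparse signals: soft thresholding.
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)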

A Reconfigurable Low Power High Throughput Streaming Architecture for Big Data Processing

General-purpose computing systems are used for a large variety of applications, but the extensive support for flexibility in these systems limits their energy efficiency. Neural networks, including deep networks, are widely used for signal processing and pattern recognition applications. In this paper we propose a multicore architecture for deep neural network-based processing. Memristor crossbars a...

Deep Sparse Coding Using Optimized Linear Expansion of Thresholds

We address the problem of reconstructing sparse signals from noisy and compressive measurements using a feed-forward deep neural network (DNN) with an architecture motivated by the iterative shrinkage-thresholding algorithm (ISTA). We maintain the weights and biases of the network links as prescribed by ISTA and model the nonlinear activation function using a linear expansion of thresholds (LET...
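
The ISTA-to-network correspondence used above is worth spelling out: each ISTA iteration x <- soft(x + (1/L) A^T (y - A x), lambda/L) becomes one feed-forward layer whose weights are fixed by A, and the cited work learns only a parameterized activation in place of the fixed soft threshold. A minimal NumPy sketch of the unrolled network with the plain soft threshold (the linear-expansion-of-thresholds parameterization itself is omitted):

    import numpy as np

    def unrolled_ista(y, A, lam=0.1, n_layers=20):
        """Feed-forward network whose k-th layer performs one ISTA step.

        W = A^T / L and S = I - A^T A / L are fixed as ISTA prescribes;
        only the activation would be learned in the LET-based approach.
        """
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        W, S = A.T / L, np.eye(A.shape[1]) - (A.T @ A) / L
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        x = soft(W @ y, lam / L)               # first layer
        for _ in range(n_layers - 1):
            x = soft(S @ x + W @ y, lam / L)   # one ISTA step per layer
        return x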

Publication date: 2016